In [1]:
%pylab inline


Populating the interactive namespace from numpy and matplotlib

In [2]:
import nltk
import os
from pprint import pprint

1.1 Working with a corpus

Let's scale up from individual texts to whole collections. In text analysis, a collection of texts assembled for study is commonly called a "corpus" (plural: corpora).

The concept of a corpus is helpful because it refers not just to a bunch of texts, but also points to the act of purposeful aggregation that brought those texts together. For example, we use the term "corpus" to refer to the collected works of a particular author. For your research, you may aggregate a corpus based on thematic or temporal criteria which are informed by your research questions and theoretical assumptions. When analyzing a corpus, or interpreting the results of such an analysis, it is important to keep in mind its provenance -- not just its contents.


In [4]:
text_root = '../../data/EmbryoProjectTexts/files'

try:
    assert os.path.exists(text_root)
except AssertionError:
    print "That directory doesn't exist!"

In [24]:
documents = nltk.corpus.PlaintextCorpusReader(text_root, 'https.+')

In [25]:
documents.words()


Out[25]:
[u'By', u'Mandana', u'Minai', u'Published', u':', ...]
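
The corpus reader gives us more than just words(): it can also list the files it matched and segment the texts into sentences. A quick sketch using the reader defined above (the slices just keep the output short):

documents.fileids()[:3]    # the first few file names matched by the 'https.+' pattern
documents.sents()[:2]      # the same texts, segmented into sentences
                           # (requires NLTK's 'punkt' sentence tokenizer data)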

Normalization and Filtering

We'll reuse the normalization and filtering procedures from the last notebook. The code cell below sets up the functions that we defined there.


In [7]:
wordnet = nltk.WordNetLemmatizer()
from nltk.corpus import stopwords
stoplist = stopwords.words('english')

def normalize_token(token):
    """
    Convert token to lowercase, and lemmatize using the WordNet lemmatizer.
    
    Parameters
    ----------
    token : str
    
    Returns
    -------
    token : str
    """
    return wordnet.lemmatize(token.lower())

def filter_token(token):
    """
    Evaluate whether or not to retain ``token``.
    
    Parameters
    ----------
    token : str
    
    Returns
    -------
    keep : bool
    """
    token = token.lower()
    return token not in stoplist and token.isalpha() and len(token) > 2
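
As a quick sanity check, we can run these functions over a hypothetical list of tokens (the example sentence below is made up for illustration):

example_tokens = ['The', 'embryos', 'were', 'examined', 'in', '1998', '!']
[normalize_token(token) for token in example_tokens if filter_token(token)]
# Expect something like ['embryo', 'examined']: stopwords, digits, and punctuation
# are dropped, and the remaining tokens are lowercased and lemmatized.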

Simple frequency distributions

We can get the frequency of tokens in our corpus just like we did for a single text, using a FreqDist (frequency distribution).

In NLTK, frequencies and probabilities are usually discussed in terms of "experiments". A frequency distribution records the frequency of specific outcomes (samples) of a repeated experiment. In this case, we are sampling tokens from a text.
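
To make the "experiment" metaphor concrete, here is a tiny, made-up example: each token in the list below is one outcome, and the FreqDist simply tallies how many times each outcome was observed.

toy_counts = nltk.FreqDist(['cell', 'embryo', 'cell', 'gene', 'cell'])
toy_counts['cell']         # -> 3
toy_counts.most_common(2)  # -> [('cell', 3), ...]  ('embryo' and 'gene' tie at 1,
                           #    so the second entry may be either)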


In [26]:
word_counts = nltk.FreqDist([normalize_token(token) 
                             for token in documents.words() 
                             if filter_token(token)])

In [27]:
word_counts.plot(20)


It can be useful to examine the number of texts in which a word occurs, to get a better picture of its distribution over the corpus. We can use a FreqDist for this, too.

We modify our logic slightly: for each document, we convert the list of normalized/filtered tokens into a set. Since sets contain no duplicates, each word will be counted at most once per text, even if it appears there several times.

In [10]:
document_counts = nltk.FreqDist([
    token    # Each token will be counted a maximum of 1 time per text.
    for fileid in documents.fileids() 
    for token in set(    # There can be no duplicates in a set.
        [normalize_token(token)    # Normalize first!
         for token 
         in documents.words(fileids=[fileid])
         if filter_token(token)]
    )
])

In [14]:
document_counts.plot(70)


In the figure above, we can see that the top ~40 words occur in around 630 texts. We can see precise values using the most_common() function:


In [15]:
document_counts.most_common(10)    # Get the 10 most common words.


Out[15]:
[(u'full', 628),
 (u'creativecommons', 628),
 (u'last', 628),
 (u'biology', 628),
 (u'encyclopedia', 628),
 (u'university', 628),
 (u'http', 628),
 (u'school', 628),
 (u'embryo', 628),
 (u'life', 628)]

It turns out that there are 628 texts in this corpus...

So these are words that occur in every single text in the corpus.


In [28]:
len(documents.fileids())


Out[28]:
628
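
If we wanted to pull out exactly those words, we could compare each word's document count against the number of texts. A small sketch using the objects defined above (the variable names are just illustrative):

n_texts = len(documents.fileids())
in_every_text = [word for word, count in document_counts.items() if count == n_texts]
len(in_every_text)    # how many words appear in all 628 texts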

Metadata

In computational humanities, it is very unusual to analyze a corpus without reference to at least some minimal metadata. The Python package called Tethne provides some useful mechanisms for importing metadata from Zotero RDF and other bibliographic formats.


In [18]:
from tethne.readers import zotero
zotero_export_path = '../../data/EmbryoProjectTexts'
metadata = zotero.read(zotero_export_path, index_by='link', follow_links=False)

Since we indexed our metadata using the "link" field, we can look up metadata for each text using its fileid.


In [29]:
example_fileid = documents.fileids()[0]
print 'This is the fileid:', example_fileid, '\n'
print 'This is the metadata for this fileid:', '\n'
pprint(metadata[example_fileid].__dict__)   # pprint means "pretty print".


This is the fileid: https--____hpsrepository.asu.edu__handle__10776__11335.txt 

This is the metadata for this fileid: 

{'authors_full': [(u'MINAI', u'MANDANA')],
 'date': 2016,
 'documentType': u'journalArticle',
 'journal': u'Embryo Project Online Encyclopedia',
 'link': u'/Users/erickpeirson/methods/data/EmbryoProjectTexts/files/https--____hpsrepository.asu.edu__handle__10776__11335.txt',
 'title': u'Methylmercury and Human Embryonic Development',
 'uri': u'https://hpsrepository.asu.edu/handle/10776/11335'}
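
Individual fields can also be accessed as attributes of the metadata record; for example (the values shown come from the output above):

print metadata[example_fileid].date     # 2016
print metadata[example_fileid].title    # Methylmercury and Human Embryonic Development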

Conditional Frequencies

We can use metadata to add dimensionality to our texts. To examine the distribution of a token over time, we can use a ConditionalFreqDist (conditional frequency distribution). Just like the FreqDist, the ConditionalFreqDist records the outcomes (samples) of an experiment. A ConditionalFreqDist also records a label, or condition, for each outcome.

In the example below, we examine the word usage of different authors. The conditions are the author names, and the samples are tokens. We will limit our analysis to four specific tokens: 'organism', 'ivf', 'pluripotent', 'supreme' (doing this for all tokens would be pretty costly).
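
A ConditionalFreqDist is built from a list of (condition, sample) pairs. Here is a tiny, made-up example with two "authors" as conditions, just to show the shape of the input and output:

toy_pairs = [('smith', 'embryo'), ('smith', 'cell'), ('jones', 'embryo'), ('smith', 'embryo')]
toyDist = nltk.ConditionalFreqDist(toy_pairs)
toyDist['smith']['embryo']    # -> 2: 'embryo' was sampled twice under the condition 'smith'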


In [20]:
focal_tokens = ['organism', 'ivf', 'pluripotent', 'supreme']

authorDist = nltk.ConditionalFreqDist([
        (str(author[0]), normalize_token(token))    # (condition, sample)
         for fileid in documents.fileids()
         for token in documents.words(fileids=[fileid])
         for author in metadata[fileid].authors
         if filter_token(token)
            and normalize_token(token) in focal_tokens
    ])

In [21]:
authorDist.tabulate()


                                      ivf organism pluripotent supreme 
      (u"BRIND'AMOUR", u'KATHERINE')    1    1   11    2 
              (u"O'BRIEN", u'CEARA')    5    0    1    1 
          (u"O'CONNELL", u'LINDSEY')    0   13    0    0 
          (u"O'CONNOR", u'KATHLEEN')    0    2    0    1 
              (u"O'NEIL", u' ERICA')    0    1    0    0 
               (u"O'NEIL", u'ERICA')    0    1    0    1 
              (u'ABBOUD', u'ALEXIS')    0    6    0    2 
              (u'ANDREI', u'AMANDA')    0    6    0    0 
          (u'ANTONIOS', u'NATHALIE')    4    0    0   19 
          (u'APPLETON', u'CAROLINE')    6    5    0    0 
          (u'ASTON', u'S ALEXANDRA')    0    8    0    0 
             (u'BARANSKI', u'MARCI')    0    4    0    0 
         (u'BARNES', u'M ELIZABETH')    0  108    0    0 
              (u'BARTLETT', u'ZANE')    0   18   26    0 
                (u'BRIGGS', u'JILL')    0    1    0    1 
               (u'BRINKMAN', u'JOE')    0   14    0    0 
        (u'BUETTNER', u'KIMBERLY A')    0    3    0    0 
             (u'CANIGLIA', u'GUIDO')    0    1    0    0 
              (u'CARVALHO', u'TITO')    0   12    0    0 
         (u'CHAPMAN', u'JENNIFER E')  106    0    0   57 
            (u'CHHETRI', u'DIVYASH')    0   25    0    0 
                 (u'CLARK', u' KAL')    0    4    0    0 
           (u'CLAY', u'ANNE SAFIYA')    0    0    1    0 
               (u'COHMER', u' SEAN')    0    5    2    0 
                (u'COHMER', u'SEAN')    0    3    1    0 
(u'COLONNA', u'FEDERICA TURRIZIANI')    0   57    0    0 
        (u'COOPER-ROTH', u'TRISTAN')    0    4    0    0 
                   (u'COX', u'TROY')    0    8    0    0 
           (u'CRAER', u'JENNIFER R')    0    4    0    0 
               (u'CROWE', u'NATHAN')    0    2    0    0 
              (u'DAMEROW', u'JULIA')    0    4    0    0 
          (u'DERUITER', u' CORINNE')    0    2    0    0 
           (u'DERUITER', u'CORINNE')    4   22    0    0 
                (u'DOTY', u' MARIA')    0    2    0    0 
                 (u'DOTY', u'MARIA')    0    5    0    0 
                 (u'DRAGO', u'MARY')    0    5    0    0 
              (u'ELLIOTT', u'STEVE')    0   48    0    0 
           (u'GARCIA', u' BENJAMIN')    1    1    0    0 
              (u'GILSON', u'HILARY')    3    0    0    0 
            (u'GUR-ARIE', u'RACHEL')    0    6    0    6 
           (u'HAMMOND', u'KATHLEEN')    2    2    2    7 
          (u'HASKETT', u'DOROTHY R')    7   16    0    0 
      (u'HASKETT', u'DOROTHY REGAN')    0   12    0    0 
            (u'HASKETT', u'DOROTHY')    0    1    0    0 
         (u'HAUSERMAN', u'SAMANTHA')    0   18    0    0 
           (u'HEATHCOTTE', u'BROCK')   26    0    0   21 
              (u'JACOBSON', u'BRAD')    3    2    0    0 
               (u'JIANG', u'LIJING')   54   22    2    0 
                (u'KEARL', u'MEGAN')    0   14    5    0 
             (u'KELLEY', u'KRISTIN')    0    3    0    0 
              (u'KHOKHAR', u'AROOB')    3    1    4    0 
                 (u'KING', u'JESSE')    0    1    0    0 
                (u'LATOURELLE', u'')   15    0    0    0 
        (u'LATOURELLE', u'JONATHAN')    0    7    0    0 
            (u'LAWRENCE', u'CERA R')    0    5    0    1 
                (u'LOPEZ', u'ANGEL')    0    3    2    0 
                 (u'LOVE', u'KAREN')    0    2    0    0 
                 (u'LOWE', u'JAMES')    0   22    0    0 
                   (u'LY', u'SARAH')    4    5    1    0 
               (u'MAAYAN', u'INBAR')    0   34    3    3 
               (u'MACCORD', u'KATE')    0   22    0    0 
              (u'MADISON', u'PAIGE')    0    2    0    0 
          (u'MAIENSCHEIN', u' JANE')    0    4    0    0 
           (u'MAIENSCHEIN', u'JANE')    0   10    0    0 
            (u'MARTINEZ', u'BRITTA')    0    2    0    0 
             (u'MAY', u' CATHERINE')    0    3    0    0 
             (u'MILLER', u'SHAWN A')    0    1    0    0 
              (u'MINAI', u'MANDANA')    0    2    0    0 
             (u'MISHRA', u'ABHINAV')    0   15    0    0 
            (u'MOELLER', u'KARLA T')    0   10    0    0 
              (u'MOELLER', u'KARLA')    0    6    0    0 
               (u'NAVIS', u'ADAM R')    0    7    0    0 
     (u'NKANSAH-DWAMENA', u'ERNEST')    0    4    0    8 
                (u'PARKER', u'SARA')    0    1    0    0 
          (u'PEIRSON', u'B R ERICK')    0   36    0    0 
           (u'PHILBRICK', u'SAMUEL')   26    1   98    0 
             (u'POTESTAS', u'JESSE')    0    3    0    0 
              (u'PREVOT', u'KARINE')    0    8    0    0 
             (u'RACINE', u'VALERIE')    0   37    0    0 
            (u'RAUP', u' CHRISTINA')    0    0    0    7 
             (u'RAUP', u'CHRISTINA')    0    1    2    6 
                (u'RESNIK', u'JACK')    0    1    0   12 
            (u'ROBERT', u' JASON S')    4    6    1    0 
          (u'ROJAS', u'CHRISTOPHER')    0    5    2    0 
          (u'RUFFENACH', u'STEPHEN')    0    6    0    0 
         (u'SCHEURMANN', u'D BRIAN')    0    3    0    0 
          (u'SCHUERMANN', u' BRIAN')    0    4    0    0 
         (u'SCHUERMANN', u'D BRIAN')    0    3    0    0 
            (u'SEWARD', u'SHERADEN')    0    0    0   16 
              (u'SMITH', u'KAITLIN')    0    1    0    0 
          (u'SUNDERLAND', u'MARY E')    0  139    0    0 
               (u'TADDEO', u'SARAH')   12   28    1    0 
    (u'TANTIBANCHACHAI', u'CHANAPA')    0    5    0   16 
(u'TURRIZIANI-COLONNA', u'FEDERICA')    0    4    0    0 
               (u'ULETT', u'MARK A')    0   39    0    0 
              (u'WELLNER', u'KAREN')    0   31    0    0 
            (u'WOLTER', u'JUSTIN M')    0   19    0    0 
              (u'WOLTER', u'JUSTIN')    0   18    0    0 
                      (u'WU', u'KE')    3    4   52    0 
               (u'YANG', u' JOANNA')    0    5    0    0 
                (u'ZHANG', u' MARK')   69    0    0   38 
                 (u'ZHANG', u'MARK')    0    0    0   32 
                  (u'ZHU', u' TIAN')    0    6    0    0 
                   (u'ZHU', u'TIAN')   37    0    0    0 
                  (u'ZOU', u'YAWEN')    0   48    0    0 
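
The full table is quite long. The tabulate() method accepts conditions= and samples= keyword arguments, so we can restrict the output to particular authors and fix the column order. For example, a sketch that keeps only the conditions containing 'ZHANG':

zhang_conditions = [c for c in authorDist.conditions() if 'ZHANG' in c]
authorDist.tabulate(conditions=zhang_conditions, samples=focal_tokens)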

Words over time

We can also use a ConditionalFreqDist to see how tokens are distributed over time. This works just like the distribution of words over authors, except that in this case we will treat our tokens as conditions. Think of it like this: each time we encounter one of the tokens in our list of focal tokens (conditions), we sample the publication date of the text from which it was drawn.


In [22]:
focal_tokens = ['organism', 'ivf', 'pluripotent', 'supreme']
timeDist = nltk.ConditionalFreqDist([
        (normalize_token(token), metadata[fileid].date)    # (condition, sample): token is the condition
         for fileid in documents.fileids()
         for token in documents.words(fileids=[fileid])
         if filter_token(token)
            and normalize_token(token) in focal_tokens
    ])

In [23]:
timeDist.plot()
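
As with tabulate(), plot() accepts keyword arguments in recent versions of NLTK; for instance, cumulative=True plots running totals, which can make gradual trends over time easier to read:

timeDist.plot(cumulative=True)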